In this notebook we practice the classification algorithms covered in this course.
We load a dataset with the pandas library, apply each of the following algorithms, and use accuracy-based evaluation metrics to find the one best suited to this particular dataset.
Let's first load the required libraries:
In [61]:
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline
This dataset is about past loans. The Loan_train.csv data set includes details of 346 customers whose loans have already been paid off or defaulted on. It includes the following fields:
| Field | Description |
|---|---|
| Loan_status | Whether a loan is paid off or in collection |
| Principal | Basic principal loan amount at origination |
| Terms | Origination terms, which can be a weekly (7 days), biweekly, or monthly payoff schedule |
| Effective_date | When the loan was originated and took effect |
| Due_date | Since it is a one-time payoff schedule, each loan has a single due date |
| Age | Age of applicant |
| Education | Education of applicant |
| Gender | The gender of applicant |
Let's download the dataset:
In [2]:
!wget -O loan_train.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_train.csv
In [3]:
df = pd.read_csv('loan_train.csv')
df.head()
Out[3]:
In [4]:
df.shape
Out[4]:
In [5]:
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
Out[5]:
Let's see how many of each class are in our data set:
In [6]:
df['loan_status'].value_counts()
Out[6]:
260 people have paid off their loan on time, while 86 have gone into collection.
Let's plot some columns to understand the data better:
In [7]:
# notice: installing seaborn might take a few minutes
!conda install -c anaconda seaborn -y
In [8]:
import seaborn as sns
bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'Principal', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
In [9]:
bins = np.linspace(df.age.min(), df.age.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'age', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
In [10]:
df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
We see that people who get the loan at the end of the week tend not to pay it off, so let's use feature binarization to create a weekend flag with a threshold at day 4 (dayofweek > 3, i.e., Friday through Sunday):
In [11]:
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
df.head()
Out[11]:
Lets look at gender:
In [12]:
df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
Out[12]:
86% of females pay off their loans, while only 73% of males do.
Let's convert male to 0 and female to 1:
In [13]:
df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
df.head()
Out[13]:
In [14]:
df.groupby(['education'])['loan_status'].value_counts(normalize=True)
Out[14]:
In [15]:
df[['Principal','terms','age','Gender','education']].head()
Out[15]:
In [16]:
Feature = df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature,pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis = 1,inplace=True)
Feature.head()
Out[16]:
Let's define our feature set, X:
In [17]:
X = Feature
X[0:5]
Out[17]:
What are our labels?
In [18]:
y = df['loan_status'].values
y[0:5]
Out[18]:
Data standardization gives the data zero mean and unit variance (technically, this should be done after the train/test split; a sketch of the split-first approach follows the cell below):
In [19]:
X= preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]
Out[19]:
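The cell above fits the scaler on the full dataset before splitting; here is a minimal sketch of the split-first approach mentioned above (the variable names X_tr, X_te, etc. are illustrative, not part of the lab):
In [ ]:
from sklearn.model_selection import train_test_split

# Split first, then fit the scaler on the training rows only
X_tr, X_te, y_tr, y_te = train_test_split(Feature, y, test_size=0.33, random_state=42)
scaler = preprocessing.StandardScaler().fit(X_tr)   # mean/std computed from the training set only
X_tr_s = scaler.transform(X_tr)
X_te_s = scaler.transform(X_te)                     # test rows never influence the scaling statistics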
In [20]:
# Encode the labels as integers: PAIDOFF -> 1, everything else (COLLECTION) -> 0
temp = (y == 'PAIDOFF')
new_y = temp.astype(int)
Now it is your turn: use the training set to build an accurate model, then use the test set to report the model's accuracy. You should use the following algorithms: K Nearest Neighbor (KNN), Decision Tree, Support Vector Machine, and Logistic Regression.
Notice: you can go back and change the pre-processing, feature selection, and feature extraction above to try to build a better model.
In [21]:
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, new_y, test_size=0.33, random_state=42)
In [22]:
y_train
Out[22]:
In [67]:
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report, confusion_matrix
best_knn, best_score = 0, 0
error_rate = []
for i in range(1, 75):
    neigh = KNeighborsClassifier(n_neighbors=i)
    neigh.fit(X_train, y_train)
    predictions = neigh.predict(X_test)
    accuracy = accuracy_score(y_test, predictions)      # test-set accuracy for this k
    error_rate.append(np.mean(predictions != y_test))   # misclassification rate for the plot below
    # Keep track of the k value with the highest test-set accuracy
    if accuracy > best_score:
        best_knn = i
        best_score = accuracy
# The following visualization code was obtained and modified from https://medium.com/@kbrook10/day-11-machine-learning-using-knn-k-nearest-neighbors-with-scikit-learn-350c3a1402e6
# Configure and plot error rate over k values
plt.figure(figsize=(20,5))
plt.plot(range(1,75), error_rate, color='blue', linestyle='dashed', marker='o', markerfacecolor='red', markersize=10)
plt.title('Error Rate vs. K-Values')
plt.xlabel('K-Values')
plt.ylabel('Error Rate')
plt.savefig("Error Rate vs k Values.png")
plt.show()
print("The KNN Classifier works best when there are ", str(best_knn), " neighbors.")
print("The accuracy in percentage corresponding to the best k values is ", best_score * 100)
In [31]:
from sklearn import metrics
from sklearn.metrics import log_loss
from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import f1_score

# Refit KNN with the best k found above, then evaluate on the test set
neigh = KNeighborsClassifier(n_neighbors=best_knn)
neigh.fit(X_train, y_train)
yhat = neigh.predict(X_test)
yhat_prob = neigh.predict_proba(X_test)
print("Log Loss:", log_loss(y_test, yhat_prob))
print("F1 Score:", f1_score(y_test, yhat, average='weighted'))
print("Jaccard Similarity:", jaccard_similarity_score(y_test, yhat))
In [ ]:
In [57]:
from sklearn import tree
dt_clf = tree.DecisionTreeClassifier(max_depth=5)
dt_clf = dt_clf.fit(X_train, y_train)
print(dt_clf.feature_importances_)
dt_clf.score(X=X_test, y=y_test)
Out[57]:
In [58]:
from sklearn.metrics import log_loss
from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import f1_score
yhat = dt_clf.predict(X_test)
yhat_prob = dt_clf.predict_proba(X_test)
print("Log Loss:", log_loss(y_test, yhat_prob))
print("F1 Score:", f1_score(y_test, yhat, average='weighted'))
print("Jaccard Similarity:", jaccard_similarity_score(y_test, yhat))
In [59]:
# notice: installing graphviz and pydotplus might take a few minutes
!conda install -c anaconda graphviz pydotplus -y
In [68]:
from sklearn.tree import export_graphviz
from io import StringIO  # sklearn.externals.six was removed in newer scikit-learn; io.StringIO works the same here
from IPython.display import Image
import pydotplus
# The following visualization code was obtained and modified from https://www.datacamp.com/community/tutorials/decision-tree-classification-python
dot_data = StringIO()
export_graphviz(dt_clf, out_file=dot_data,
feature_names=['Principal','terms','age','Gender','weekend', 'Bechalor', 'High School or Below', 'college'],
class_names=['COLLECTION', 'PAIDOFF'],  # class 0 = collection (not paid off), class 1 = paid off
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png('DT Classifier Graph.png')
Image(graph.create_png())
Out[68]:
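If graphviz and pydotplus are not available, scikit-learn 0.21+ can draw the same tree directly with sklearn.tree.plot_tree; a minimal sketch reusing the fitted dt_clf:
In [ ]:
from sklearn.tree import plot_tree

plt.figure(figsize=(20, 10))
plot_tree(dt_clf,
          feature_names=['Principal', 'terms', 'age', 'Gender', 'weekend',
                         'Bechalor', 'High School or Below', 'college'],
          class_names=['COLLECTION', 'PAIDOFF'],   # class 0 = collection, class 1 = paid off
          filled=True, rounded=True)
plt.show()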
In [ ]:
In [43]:
from sklearn import svm
svm_clf = svm.SVC(probability=True)
svm_clf.fit(X_train, y_train)
svm_clf.score(X_test, y_test)
Out[43]:
In [44]:
from sklearn.metrics import log_loss
from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import f1_score
yhat = svm_clf.predict(X_test)
yhat_prob = svm_clf.predict_proba(X_test)
print("Log Loss:", log_loss(y_test, yhat_prob))
print("F1 Score:", f1_score(y_test, yhat, average='weighted'))
print("Jaccard Similarity:", jaccard_similarity_score(y_test, yhat))
In [ ]:
In [69]:
from sklearn.linear_model import LogisticRegression
lr_clf = LogisticRegression(random_state=0, solver='lbfgs', multi_class='ovr').fit(X_train, y_train)
lr_clf.score(X_test, y_test)
Out[69]:
In [ ]:
In [72]:
from sklearn.metrics import log_loss
from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import f1_score
from sklearn import metrics
yhat = lr_clf.predict(X_test)
yhat_prob = lr_clf.predict_proba(X_test)
yhat_plot_prob = yhat_prob[:, 1]  # predicted probability of the positive class (PAIDOFF = 1)
print("Log Loss:", log_loss(y_test, yhat_prob))
print("F1 Score:", f1_score(y_test, yhat, average='weighted'))
print("Jaccard Similarity:", jaccard_similarity_score(y_test, yhat))
In [75]:
# The Following code was obtained and modified from https://www.datacamp.com/community/tutorials/understanding-logistic-regression-python
fpr, tpr, _ = metrics.roc_curve(y_test, yhat_plot_prob)
auc = metrics.roc_auc_score(y_test, yhat_plot_prob)
plt.plot(fpr, tpr, label="data 1, auc="+str(auc))
plt.legend(loc=4)
plt.xlabel("False Positive Rate")
plt.xlabel("True Positive Rate")
plt.title("ROC Curve")
plt.show()
In [47]:
from sklearn.metrics import jaccard_similarity_score
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss
First, download and load the test set:
In [48]:
!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv
In [49]:
test_df = pd.read_csv('loan_test.csv')
test_df.head()
Out[49]:
In [50]:
old_df = df
In [51]:
# Apply the same preprocessing steps to the test set as were applied to the training set
df = test_df
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
df['loan_status'].value_counts()
df['dayofweek'] = df['effective_date'].dt.dayofweek
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
df.head()
df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
df.head()
df.groupby(['education'])['loan_status'].value_counts(normalize=True)
df[['Principal','terms','age','Gender','education']].head()
df.head()
Out[51]:
In [52]:
test_df = df
test_Feature = test_df[['Principal','terms','age','Gender','weekend']]
test_Feature = pd.concat([test_Feature,pd.get_dummies(test_df['education'])], axis=1)
test_Feature.drop(['Master or Above'], axis = 1,inplace=True)
test_Feature = test_Feature.dropna(thresh=2)  # keep rows with at least 2 non-null values (must be assigned to take effect)
test_Feature.head()
Out[52]:
In [53]:
test_x = test_Feature
test_y = test_df['loan_status'].values
# Note: ideally the StandardScaler fitted on the training data would be reused here
test_x = preprocessing.StandardScaler().fit(test_x).transform(test_x)
# Encode the labels the same way as for the training set: PAIDOFF -> 1, COLLECTION -> 0
test_temp_y = (test_y == 'PAIDOFF')
test_y = test_temp_y.astype(int)
In [54]:
test_x[0:5]
Out[54]:
In [55]:
test_y
Out[55]:
In [56]:
for classifier in [neigh, dt_clf, svm_clf, lr_clf]:
    print("Classifier:", type(classifier))
    yhat = classifier.predict(test_x)
    yhat_prob = classifier.predict_proba(test_x)
    print("LogLoss:", log_loss(test_y, yhat_prob))
    print("F1-Score:", f1_score(test_y, yhat, average='weighted'))
    print("Jaccard:", jaccard_similarity_score(test_y, yhat))
In [ ]:
In [ ]:
| Algorithm | Jaccard | F1-score | LogLoss |
|---|---|---|---|
| KNN | 0.74 | 0.63 | NA |
| Decision Tree | 0.69 | 0.68 | NA |
| SVM | 0.76 | 0.74 | NA |
| LogisticRegression | 0.72 | 0.70 | 0.48 |
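The table above was filled in by hand from the output of the evaluation loop. A small sketch (assuming the fitted classifiers and the test arrays from the cells above, and the same, older jaccard_similarity_score API used throughout this notebook) that collects the metrics into a DataFrame instead of printing them:
In [ ]:
rows = []
for name, clf in [('KNN', neigh), ('Decision Tree', dt_clf),
                  ('SVM', svm_clf), ('LogisticRegression', lr_clf)]:
    yhat = clf.predict(test_x)
    rows.append({'Algorithm': name,
                 'Jaccard': jaccard_similarity_score(test_y, yhat),
                 'F1-score': f1_score(test_y, yhat, average='weighted'),
                 'LogLoss': log_loss(test_y, clf.predict_proba(test_x))})
report = pd.DataFrame(rows, columns=['Algorithm', 'Jaccard', 'F1-score', 'LogLoss'])
report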
IBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems – by your enterprise as a whole. A free trial is available through this course, available here: SPSS Modeler
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at Watson Studio
Saeed Aghabozorgi, PhD, is a Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge. He is a researcher in the data mining field and an expert in developing advanced analytic methods, such as machine learning and statistical modelling, on large datasets.
Copyright © 2018 Cognitive Class. This notebook and its source code are released under the terms of the MIT License.